1.
Food Chem; 443: 138569, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38306906

ABSTRACT

Zearalenone (ZEN) is a non-steroidal estrogenic mycotoxin that seriously threatens food safety, so rapid and sensitive methods for monitoring ZEN in agro-products are needed. Herein, an alkaline phosphatase-tagged single-chain variable fragment fusion protein (ALP-scFv) was used as a bifunctional tracer to develop a colorimetric enzyme immunoassay (CEIA) and a chemiluminescent enzyme immunoassay (CLEIA) for ZEN. In addition, the interactions between the scFv and ZEN were explored by computer-assisted simulation, and four key amino acid sites were preliminarily identified. After optimization, the CEIA and CLEIA exhibited limits of detection of 0.02 and 0.006 ng/mL, respectively. Furthermore, both methods showed favorable accuracy in recovery experiments and good selectivity in cross-reactivity tests, and their results on real samples correlated well with those from high-performance liquid chromatography. Overall, the ALP-scFv fusion tracer-based CEIA and CLEIA are reliable tools for ZEN detection in food.
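As a rough illustration of how limits of detection like these are typically derived from a competitive immunoassay, the Python sketch below fits a four-parameter logistic (4PL) calibration curve to standards and reads an LOD off the fitted curve. The standard concentrations, signal values, and the 10%-inhibition LOD criterion are illustrative assumptions, not data or methods from this study.

```python
# Hypothetical sketch: fitting a 4PL calibration curve to competitive
# immunoassay readings and estimating IC50 / LOD. All numbers below are
# illustrative, not values from the paper.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """4PL model: a = max response, d = min response,
    c = IC50, b = slope (Hill coefficient)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Illustrative ZEN standards (ng/mL) and normalized signals (B/B0)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
signal = np.array([0.97, 0.90, 0.75, 0.52, 0.30, 0.14, 0.06])

# Bounds keep a > d and c > 0 so the model stays well defined during fitting
params, _ = curve_fit(four_pl, conc, signal, p0=[1.0, 1.0, 0.5, 0.0],
                      bounds=([0.5, 0.1, 1e-4, -0.2], [1.5, 5.0, 10.0, 0.2]))
a, b, c, d = params
print(f"IC50 ≈ {c:.3f} ng/mL")

# One common (assumed) LOD criterion: the concentration producing a 10%
# signal decrease, i.e., 90% of the maximal response on the fitted curve.
target = d + 0.9 * (a - d)
lod = c * ((a - d) / (target - d) - 1.0) ** (1.0 / b)
print(f"Estimated LOD ≈ {lod:.4f} ng/mL")
```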


Subjects
Single-Chain Antibodies; Zearalenone; Alkaline Phosphatase/metabolism; Zearalenone/analysis; Colorimetry; Immunoenzyme Techniques; Coloring Agents/analysis; Food Contamination/analysis; Immunoassay/methods
2.
Transl Vis Sci Technol; 9(2): 61, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33329940

ABSTRACT

Purpose: To automate the segmentation of retinal layers, we propose DeepRetina, a method based on deep neural networks.

Methods: DeepRetina uses the improved Xception65 backbone to extract and learn the characteristics of retinal layers. The Xception65-extracted feature maps are fed into an atrous spatial pyramid pooling module to obtain multiscale feature information, which the encoder-decoder module then recovers to capture clearer retinal layer boundaries, completing automatic layer segmentation of retinal optical coherence tomography (OCT) images.

Results: We validated the method on a retinal OCT image database containing 280 volumes (40 B-scans per volume). It performed well, reaching a mean intersection over union of 90.41% and a sensitivity (Se) of 92.15%. The intersection over union and Se values for the nerve fiber layer, ganglion cell layer, inner plexiform layer, inner nuclear layer, outer plexiform layer, outer nuclear layer, outer limiting membrane, photoreceptor inner segment, photoreceptor outer segment, and pigment epithelium layer were all above 88%.

Conclusions: DeepRetina can automate the segmentation of retinal layers and has great potential for the early diagnosis of retinal fundus diseases. In addition, our approach provides a segmentation model framework for other types of tissues and cells in clinical practice.

Translational Relevance: Automated retinal layer segmentation can help diagnose and monitor retinal diseases effectively, and it requires only a small amount of manual segmentation, significantly improving work efficiency.
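The multiscale feature step described in Methods matches the atrous spatial pyramid pooling (ASPP) design used in DeepLabv3+-style segmentation networks. Below is a minimal PyTorch sketch of such a module; the channel sizes and dilation rates are common defaults for an Xception-style backbone, not values confirmed by the paper.

```python
# Minimal ASPP sketch: parallel dilated convolutions plus image-level
# pooling, concatenated and projected, as in DeepLabv3+-style decoders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    def __init__(self, in_ch=2048, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        # One 1x1 branch plus three dilated 3x3 branches at different rates
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 1, bias=False)]
            + [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
               for r in rates]
        )
        # Image-level pooling branch captures global context
        self.pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 2), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        pooled = F.interpolate(self.pool(x), size=(h, w), mode="bilinear",
                               align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

# Usage on a hypothetical backbone feature map (e.g., from Xception65)
x = torch.randn(1, 2048, 32, 32)
print(ASPP()(x).shape)  # torch.Size([1, 256, 32, 32])
```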


Subjects
Deep Learning; Retinal Diseases; Humans; Retina/diagnostic imaging; Tomography, Optical Coherence
3.
Med Phys; 47(9): 4212-4222, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32583463

ABSTRACT

PURPOSE: To automate the detection and identification of visible components in feces for the early diagnosis of gastrointestinal diseases, we propose FecalNet, a method using multiple deep neural networks.

METHODS: FecalNet uses the ResNet152 residual network to extract and learn the characteristics of visible components in fecal microscopic images, acquires feature maps in combination with a feature pyramid network, applies a fully convolutional network to classify and locate the fecal components, and implements an improved focal loss function to re-optimize the classification results. This allows fully automated detection and identification of the visible components in feces.

RESULTS: We validated the method on a fecal database from 1,122 patients. It achieved a mean average precision (mAP) of 92.16% and an average recall (AR) of 93.56%. The per-class average precision (AP) and AR were:

Component                             AP (%)   AR (%)
Erythrocytes                          92.82    93.38
Leukocytes                            93.99    96.11
Intestinal mucosal epithelial cells   90.71    92.41
Hookworm eggs                         89.95    93.88
Ascarid eggs                          96.90    91.21
Whipworm eggs                         88.61    94.37

The average time required to analyze a fecal microscopic image is approximately 0.14 s on a GPU and 1.02 s on a CPU.

CONCLUSION: FecalNet can automate the detection and identification of visible components in feces. It also provides a detection and identification framework for several other cell types in clinical practice.
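The focal loss mentioned in METHODS down-weights well-classified examples so that training focuses on hard ones. The paper's "improved" variant is not described in the abstract, so the sketch below shows only the standard binary focal loss of Lin et al. (2017) in PyTorch; the alpha and gamma values are the usual defaults, not the paper's settings.

```python
# Standard binary focal loss (Lin et al., 2017), shown as a reference
# point for the "improved focal loss" the abstract mentions but does
# not specify.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """logits and targets have the same shape; targets are in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, targets,
                                             reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)    # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma shrinks the loss of easy (high-confidence) examples
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Example: confident correct predictions contribute far less than hard ones
logits = torch.tensor([4.0, -4.0, 0.1])
targets = torch.tensor([1.0, 0.0, 1.0])
print(focal_loss(logits, targets))
```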


Subjects
Deep Learning; Feces; Humans; Leukocytes; Microscopy; Neural Networks, Computer